
    Methods for Wheel Slip and Sinkage Estimation in Mobile Robots

    Future outdoor mobile robots will have to explore ever larger areas, performing difficult tasks while, at the same time, preserving their safety. This will primarily require advanced sensing and perception capabilities. Video sensors supply contact-free, precise measurements and are flexible devices that can be easily integrated with multi-sensor robotic platforms. Hence, they represent a potential answer to the need for new and improved perception capabilities in autonomous vehicles. One of the main applications of vision in mobile robotics is localization. For mobile robots operating on rough terrain, conventional dead-reckoning techniques are not well suited, since wheel slippage, sinkage, and sensor drift may cause localization errors that accumulate without bound during the vehicle's travel. Conversely, video sensors are exteroceptive devices, that is, they acquire information from the robot's environment; vision-based motion estimates are therefore independent of knowledge of terrain properties and wheel-terrain interaction. Like dead reckoning, vision can also accumulate errors; however, it has been shown to produce more accurate results than dead reckoning and can be considered a promising solution to the problem of robust robot positioning in high-slip environments. As a consequence, several vision-based localization methods have been developed in recent years. Among them, visual odometry algorithms, based on the tracking of visual features over subsequent images, have proved particularly effective. Accurate and reliable methods to sense slippage and sinkage are also desirable, since these effects degrade the vehicle's traction performance and energy consumption, and lead to gradual deviation of the robot from the intended path, possibly resulting in large drift and poor performance of localization and control systems. For example, conventional dead-reckoning techniques are largely compromised, since they rest on the assumption that wheel revolutions can be translated into corresponding linear displacements. Thus, if a wheel slips, the associated encoder will register revolutions even though these revolutions do not correspond to a linear displacement of the wheel; conversely, if a wheel skids, fewer encoder pulses will be counted. Slippage and sinkage measurements are also valuable for terrain identification according to classical terramechanics theory. This chapter investigates vision-based onboard technology to improve the mobility of robots on natural terrain. A visual odometry algorithm and two methods for online measurement of vehicle slip angle and wheel sinkage, respectively, are discussed. Test results are presented showing the performance of the proposed approaches on an all-terrain rover moving across uneven terrain.
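The encoder argument above can be made concrete with a short sketch. The function names and numeric values below are illustrative, not taken from the chapter: encoder ticks are first converted into the linear displacement they would imply under pure rolling, and comparing that figure with an independent (e.g. vision-based) displacement estimate yields a slip ratio.

```python
import math

def encoder_displacement(ticks: int, ticks_per_rev: int, wheel_radius_m: float) -> float:
    """Linear displacement implied by encoder ticks, assuming pure rolling (no slip)."""
    revolutions = ticks / ticks_per_rev
    return revolutions * 2.0 * math.pi * wheel_radius_m

def slip_ratio(encoder_disp_m: float, true_disp_m: float) -> float:
    """Slip ratio: 0 = pure rolling, approaching 1 = wheel spinning in place.
    Negative values indicate skidding (the wheel turns less than it travels)."""
    if encoder_disp_m == 0.0:
        return 0.0
    return (encoder_disp_m - true_disp_m) / encoder_disp_m

# Example: the encoder reports 1000 ticks on a 0.15 m radius wheel with
# 500 ticks/rev, while an external estimate (e.g. visual odometry, taken
# as ground truth here) measures 1.5 m actually travelled.
d_enc = encoder_displacement(1000, 500, 0.15)  # ~1.885 m implied by the encoder
s = slip_ratio(d_enc, 1.5)                     # ~0.20, i.e. about 20% slip
```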

    Adaptive Multi-sensor Perception for Driving Automation in Outdoor Contexts

    In this research, adaptive perception for driving automation is discussed, with the aim of enabling a vehicle to automatically detect driveable areas and obstacles in the scene. It is especially designed for outdoor contexts, where conventional perception systems that rely on a priori knowledge of the terrain's geometric properties, appearance properties, or both are prone to fail, due to the variability of terrain properties and environmental conditions. In contrast, the proposed framework uses a self-learning approach to build a model of the ground class that is continuously adjusted online to reflect the latest ground appearance. The system is also highly flexible, as it can work with a single sensor modality or a multi-sensor combination. In the context of this research, different embodiments have been demonstrated using range data coming from either a radar or a stereo camera, and adopting self-supervised strategies in which monocular vision is automatically trained by radar or stereo vision. A comprehensive set of experimental results, obtained with different ground vehicles operating in the field, is presented to validate and assess the performance of the system.
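As a rough illustration of the self-learning idea (not the paper's actual model), the sketch below maintains a single Gaussian over ground-pixel colour, updated online with exponential forgetting so it tracks the latest ground appearance, and classifies new pixels with a Mahalanobis distance test. The class name, the forgetting factor and the threshold are all assumptions.

```python
import numpy as np

class OnlineGroundModel:
    """Illustrative self-learning ground-appearance model: a single Gaussian
    over pixel colour, adapted online with exponential forgetting."""

    def __init__(self, dim: int = 3, alpha: float = 0.05):
        self.alpha = alpha        # forgetting factor: higher = adapt faster
        self.mean = np.zeros(dim)
        self.cov = np.eye(dim)
        self.initialised = False

    def update(self, ground_pixels: np.ndarray) -> None:
        """ground_pixels: (N, dim) colours labelled 'ground' by the supervising
        sensor (e.g. radar or stereo in a self-supervised configuration)."""
        mu = ground_pixels.mean(axis=0)
        sigma = np.cov(ground_pixels, rowvar=False) + 1e-6 * np.eye(len(mu))
        if not self.initialised:
            self.mean, self.cov, self.initialised = mu, sigma, True
        else:
            self.mean = (1 - self.alpha) * self.mean + self.alpha * mu
            self.cov = (1 - self.alpha) * self.cov + self.alpha * sigma

    def is_ground(self, pixels: np.ndarray, thresh: float = 9.0) -> np.ndarray:
        """Squared Mahalanobis distance test against the current ground model."""
        diff = pixels - self.mean
        d2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(self.cov), diff)
        return d2 < thresh
```

Each frame, pixels labelled by the supervising range sensor feed `update`, and `is_ground` then classifies the rest of the image, so no fixed a priori appearance model is needed.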

    Vision-based Estimation of Slip Angle for Mobile Robots and Planetary Rovers

    2008 IEEE International Conference on Robotics and Automation, Pasadena, CA, USA, May 19-23, 2008

    Deep neural networks for grape bunch segmentation in natural images from a consumer-grade camera

    Precision agriculture relies on the availability of accurate knowledge of crop phenotypic traits at the sub-field level. While visual inspection by human experts has traditionally been adopted for phenotyping estimations, sensors mounted on field vehicles are becoming valuable tools to increase accuracy on a narrower scale, as well as to reduce execution time and labor costs. In this respect, automated processing of sensor data for accurate and reliable fruit detection and characterization is a major research challenge, especially when the data consist of low-quality natural images. This paper investigates the use of deep learning frameworks for automated segmentation of grape bunches in color images from a consumer-grade RGB-D camera placed on board an agricultural vehicle. A comparative study, based on the estimation of two image segmentation metrics, i.e. the segmentation accuracy and the well-known Intersection over Union (IoU), is presented to assess the performance of four pre-trained network architectures, namely AlexNet, GoogLeNet, VGG16, and VGG19. Furthermore, a novel strategy aimed at improving the segmentation of bunch pixels is proposed. It is based on an optimal threshold selection over the bunch probability maps, as an alternative to the conventional minimization of the cross-entropy loss of mutually exclusive classes. Results obtained in field tests show that the proposed strategy improves the mean segmentation accuracy of the four deep neural networks by between 2.10% and 8.04%. Moreover, the comparative study of the four networks demonstrates that the best performance is achieved by VGG19, which reaches a mean segmentation accuracy on the bunch class of 80.58%, with an IoU for the bunch class of 45.64%.
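The two metrics, and the idea of selecting an optimal threshold on the bunch probability map, can be sketched as follows. Here "segmentation accuracy" is interpreted as recall on the bunch class, which is an assumption; the function names are illustrative.

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over Union between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 0.0

def segmentation_accuracy(pred: np.ndarray, truth: np.ndarray) -> float:
    """Fraction of true bunch pixels correctly labelled (recall on the bunch class)."""
    return np.logical_and(pred, truth).sum() / truth.sum()

def best_threshold(prob_map: np.ndarray, truth: np.ndarray,
                   candidates=np.linspace(0.1, 0.9, 17)) -> float:
    """Pick the probability threshold maximising IoU on a validation image,
    instead of the implicit 0.5 cut of an argmax over mutually exclusive classes."""
    return max(candidates, key=lambda t: iou(prob_map >= t, truth))
```

The threshold found on validation data would then be applied when binarising bunch probability maps at test time.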

    Clustering and PCA for Reconstructing Two Perpendicular Planes Using Ultrasonic Sensors

    In this paper, the authors make use of sonar transducers to detect the corner formed by two orthogonal panels, and they propose a strategy for accurately reconstructing the surfaces. To point a linear array of four sensors at the desired position, the motion of a digital motor is appropriately controlled. When the sensors are directed towards the intersection between the planes, longer times of flight are observed because of multiple reflections. All the distances concerned have to be excluded, and for this reason an indicator based on the output signal energy is introduced. A clustering technique partitions the dataset into three clusters, and the indicator selects the subset containing misrepresented information. The remaining distances are corrected to take the directivity into account, and they allow two sets of points to be plotted in three-dimensional space. To leave out the outliers, each set is filtered by means of a confidence ellipsoid defined through Principal Component Analysis (PCA). The best-fit planes are obtained from the principal directions and the variances. Experimental tests and results are shown, demonstrating the effectiveness of this new approach.
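The outlier-filtering and plane-fitting steps can be sketched together in a few lines (an illustration of the general technique, not the authors' implementation): points outside a chi-square confidence ellipsoid are discarded, and the best-fit plane normal is taken as the principal direction of smallest variance.

```python
import numpy as np

def pca_plane_fit(points: np.ndarray, chi2_thresh: float = 7.81):
    """Fit a plane to 3-D points via PCA after ellipsoid-based outlier removal.
    7.81 is roughly the 95% quantile of chi-square with 3 degrees of freedom.
    Returns (centroid, unit_normal)."""
    centred = points - points.mean(axis=0)
    cov = np.cov(centred, rowvar=False)
    # Squared Mahalanobis distance of each point; large values fall outside
    # the confidence ellipsoid and are treated as outliers.
    d2 = np.einsum('ij,jk,ik->i', centred, np.linalg.inv(cov), centred)
    inliers = points[d2 < chi2_thresh]
    c = inliers.mean(axis=0)
    # Principal directions of the inliers: the eigenvector with the smallest
    # variance (first column of eigh's ascending-order output) is the normal.
    _, v = np.linalg.eigh(np.cov(inliers - c, rowvar=False))
    return c, v[:, 0]
```

Running the fit once per cluster would yield the two best-fit planes, from which the corner line can be recovered as their intersection.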

    Unevenness Point Descriptor for Terrain Analysis in Mobile Robot Applications

    In recent years, the use of imaging sensors that produce a three-dimensional representation of the environment has become an efficient way to increase the degree of perception of autonomous mobile robots. Accurate and dense 3D point clouds can be generated by traditional stereo systems and laser scanners or by the new generation of RGB-D cameras, representing a versatile, reliable and cost-effective solution that is rapidly gaining interest within the robotics community. For autonomous mobile robots, it is critical to assess the traversability of the surrounding environment, especially when driving across natural terrain. In this paper, a novel approach to detect traversable and non-traversable regions of the environment from a depth image is presented that could enhance mobility and safety through integration with localization, control and planning methods. The proposed algorithm is based on the analysis of the normal vector of a surface obtained through Principal Component Analysis, and it leads to the definition of a novel descriptor, termed the Unevenness Point Descriptor. Experimental results, obtained with vehicles operating in indoor and outdoor environments, are presented to validate this approach.
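The normal-vector analysis can be illustrated with a minimal sketch: per-point normals are obtained as the smallest-variance principal direction of each point's neighbourhood, and their local disagreement gives a simple unevenness score. The score below is only illustrative; the actual Unevenness Point Descriptor defined in the paper is different.

```python
import numpy as np

def local_normals(points: np.ndarray, k: int = 8) -> np.ndarray:
    """Per-point surface normals from PCA of the k nearest neighbours.
    Brute-force neighbour search; fine for a sketch, too slow for dense clouds."""
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        idx = np.argsort(((points - p) ** 2).sum(axis=1))[:k]
        nb = points[idx] - points[idx].mean(axis=0)
        _, v = np.linalg.eigh(np.cov(nb, rowvar=False))
        normals[i] = v[:, 0]  # smallest-variance direction = surface normal
    return normals

def unevenness(normals: np.ndarray) -> float:
    """Illustrative unevenness score: 0 on a plane (all normals agree),
    approaching 1 when normals are scattered. Not the paper's exact descriptor."""
    # Normals are defined up to sign; align them before averaging.
    ref = normals[0]
    aligned = normals * np.sign(normals @ ref)[:, None]
    return 1.0 - np.linalg.norm(aligned.mean(axis=0))
```

Thresholding such a score per region would separate flat, traversable ground from uneven, non-traversable patches.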

    Three Different Approaches for Localization in a Corridor Environment by Means of an Ultrasonic Wide Beam

    In this paper, the authors present three methods to detect the position and orientation of an observer, such as a mobile robot, with respect to a corridor wall. They use an inexpensive sensor that spreads a wide ultrasonic beam. The sensor is rotated by means of an accurate servomotor in order to propagate ultrasonic waves towards a regular wall. Whatever the wall material, the scanned surface acts as an acoustic reflector as a consequence of the low acoustic impedance of air. The realized device gives distance information at each motor position and thus allows the derivation of a set of points, much like a ray-trace scanner. The dataset contains points lying on a circular arc and corresponding to strong returns. Three different approaches are considered here to estimate both the slope of the wall and its minimum distance from the sensor. Slope and perpendicular distance are the parameters of a target plane, which may be calculated at each observer position to predict its next location. Experimental tests and simulations, carried out by scanning from different stationary locations, are shown and discussed; they allow the effectiveness of the proposed approaches to be appreciated.
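Estimating the wall's slope and perpendicular distance from the arc of strong returns amounts to a line fit in the sensor frame. The sketch below is illustrative and is not one of the paper's three approaches: it fits the wall in Hesse normal form x·cos(phi) + y·sin(phi) = rho by total least squares on the 2x2 scatter matrix of the scan points.

```python
import math

def wall_from_scan(readings):
    """Estimate the wall from (angle, range) sonar readings taken at a fixed
    sensor position placed at the origin. Returns (phi, rho): the angle of the
    wall's normal and the perpendicular distance from the sensor."""
    pts = [(r * math.cos(a), r * math.sin(a)) for a, r in readings]
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    sxx = sum((p[0] - mx) ** 2 for p in pts)
    syy = sum((p[1] - my) ** 2 for p in pts)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pts)
    # The line direction is the maximum-variance eigenvector of the scatter
    # matrix (closed form in 2-D); the wall normal is that direction + 90 deg.
    phi = 0.5 * math.atan2(2 * sxy, sxx - syy) + math.pi / 2
    rho = mx * math.cos(phi) + my * math.sin(phi)
    if rho < 0:  # keep the perpendicular distance positive
        rho, phi = -rho, phi + math.pi
    return phi, rho
```

Given (phi, rho) at successive observer positions, the change in the wall parameters directly yields the observer's displacement and rotation relative to the corridor.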